Abstract
Segmenting the median nerve is essential for identifying nerve entrapment syndromes, guiding surgical planning and interventions, and furthering understanding of nerve anatomy. This study aims to develop an automated tool that can assist clinicians in localizing and segmenting the median nerve at the wrist, mid-forearm, and elbow in ultrasound videos. To our knowledge, this is the first single, fully automated deep learning model for accurate segmentation of the median nerve from the wrist to the elbow in ultrasound videos, together with computation of the nerve's cross-sectional area (CSA). A visual transformer architecture, originally proposed to detect and classify 41 classes in YouTube videos, was modified to predict the median nerve in every frame of an ultrasound video. This is achieved by modifying the bounding box sequence matching block of the visual transformer: because median nerve segmentation is a binary prediction task, the entire bipartite matching sequence is eliminated, enabling direct frame-by-frame comparison of predictions with expert annotations. Model training, validation, and testing were performed on ultrasound videos collected from 100 subjects, partitioned into 80, 10, and 10 subjects, respectively. The proposed model was compared with U-Net, U-Net++, Siam U-Net, Attention U-Net, LSTM U-Net, and Trans U-Net. The proposed transformer-based model effectively leveraged the temporal and spatial information in ultrasound video frames and efficiently segmented the median nerve, achieving an average Dice similarity coefficient (DSC) of approximately 94% at the wrist and 84% across the entire forearm region.
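The two quantities reported above, the Dice similarity coefficient used to evaluate segmentation quality and the cross-sectional area derived from each predicted mask, can be computed directly from binary masks. The sketch below is illustrative only, not the paper's implementation; the `mm_per_pixel` calibration parameter is an assumption standing in for the probe's actual pixel spacing.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient (DSC) between two binary masks.

    DSC = 2 * |pred ∩ gt| / (|pred| + |gt|); eps guards against
    division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float(2.0 * intersection / (pred.sum() + gt.sum() + eps))

def cross_sectional_area(mask: np.ndarray, mm_per_pixel: float) -> float:
    """CSA in mm^2: foreground pixel count times the area of one pixel.

    mm_per_pixel is a hypothetical calibration value that would come
    from the ultrasound scanner's imaging depth and resolution.
    """
    return float(mask.astype(bool).sum()) * mm_per_pixel ** 2
```

In a frame-by-frame evaluation of the kind described in the abstract, per-frame DSC values would simply be averaged over all frames of a test video.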
Journal: IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control