Abstract
Bone identification and segmentation in X-ray images are crucial in orthopedics for the automation of clinical procedures, but they often involve manual operations. In this work, using a modified SegNet neural network, we automatically identify and segment lower limb bone structures on radiographs presenting various fields of view and different patient orientations. A wide contextual neural network architecture is proposed to perform high-quality pixel-wise semantic segmentation on X-ray images presenting structures with a similar appearance and strong superposition. The proposed architecture is based on the premise that every output pixel on the label map has a wide receptive field, which allows the network to capture both global and local contextual information. Overlap between structures is handled with additional labels. The proposed approach was evaluated on a test dataset of 70 radiographs containing entire and partial bones. We obtained an average detection rate of 98.00% and an average Dice coefficient of 95.25 ± 9.02% across all classes. For the challenging subset of images with high superposition, we obtained an average detection rate of 96.36% and an average Dice coefficient of 93.81 ± 10.03% across all classes. The results show the effectiveness of the proposed approach in segmenting and identifying lower limb bone structures and overlapping structures in radiographs with strong bone superposition and highly variable configurations, as well as in radiographs containing only small pieces of bone structures.
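The per-class Dice coefficients reported above measure the overlap between a predicted segmentation mask and its ground-truth mask. As a point of reference (not the authors' evaluation code), a minimal sketch of the metric for binary masks, with a hypothetical `dice_coefficient` function and a small smoothing term `eps` as assumptions:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks.

    pred, target: arrays of the same shape, nonzero where the class is present.
    eps is a small smoothing term to avoid division by zero on empty masks.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: a predicted mask covering 2 pixels, ground truth covering 1 of them.
pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [0, 0]])
score = dice_coefficient(pred, target)  # 2*1 / (2+1) ≈ 0.667
```

In a multi-class setting such as this paper's, the metric would be computed per class (one binary mask per bone structure) and then averaged.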
Journal: International Journal of Computer Assisted Radiology and Surgery